In computer science, an in-place algorithm is an algorithm that transforms its input using a data structure with only a small amount of extra storage space. The input is usually overwritten by the output as the algorithm executes. An algorithm which is not in-place is sometimes called not-in-place or out-of-place.

In-place can have slightly different meanings. In its strictest form, the algorithm may use only a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited, as simply holding an index into an array of length ''n'' requires O(log ''n'') bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input, but may require a small though non-constant amount of extra space for its operation. Usually, this space is O(log ''n''), though sometimes anything in o(''n'') is allowed. Note that space complexity also has varied conventions for whether or not to count index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra log ''n'' factor compared to an analysis that ignores the lengths of indices and pointers.

An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When the output is written to write-only memory or a stream, it may be more appropriate to consider only the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is ''write-only'').

== Examples ==

Given an array a of ''n'' items, suppose we want an array that holds the same elements in reversed order, and to dispose of the original. One seemingly simple way to do this is to create a new array of equal size, fill it with copies from a in the appropriate order, and then delete a:

 function reverse(a(0..n − 1))
     allocate b(0..n − 1)
     for i from 0 to n − 1
         b(n − 1 − i) := a(i)
     return b

Unfortunately, this requires O(''n'') extra space for having the arrays a and b available simultaneously. Also, allocation and deallocation are often slow operations. Since we no longer need a, we can instead overwrite it with its own reversal using this in-place algorithm, which needs only a constant number (two) of integers for the auxiliary variables i and tmp, no matter how large the array is:

 function reverse_in_place(a(0..n − 1))
     for i from 0 to floor((n − 2)/2)
         tmp := a(i)
         a(i) := a(n − 1 − i)
         a(n − 1 − i) := tmp

As another example, there are a number of sorting algorithms that can rearrange arrays into sorted order in-place, including Bubble sort, Comb sort, Selection sort, Insertion sort, Heapsort, and Shell sort. These algorithms require only a few pointers, so their space complexity is O(log ''n'').

The standard implementation of Quicksort operates in-place on the data to be sorted, as it only ever swaps two elements. However, most implementations require O(log ''n'') recursive calls as part of the divide-and-conquer strategy. Since each recursive call holds pointers of size O(log ''n''), the total space required is O(log<sup>2</sup> ''n''). Quicksort is usually still considered an in-place algorithm, though it does require more space than the sorting algorithms listed earlier.
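To make the stack-space point concrete, the following is a minimal sketch in C of an in-place quicksort (the names quicksort, partition and swap and the choice of integer keys are illustrative, not taken from any particular library). The partition step only swaps elements within the array itself, and recursing on the smaller partition while looping over the larger one keeps the number of outstanding recursive calls at O(log ''n'') even in the worst case.

 #include <stddef.h>
 
 /* Swap two elements in place; no auxiliary array is needed. */
 static void swap(int *x, int *y) {
     int tmp = *x;
     *x = *y;
     *y = tmp;
 }
 
 /* Lomuto partition: moves the last element (the pivot) to its final
    position and returns that position. Uses only O(1) extra variables. */
 static size_t partition(int a[], size_t lo, size_t hi) {
     int pivot = a[hi];
     size_t i = lo;
     for (size_t j = lo; j < hi; j++) {
         if (a[j] < pivot) {
             swap(&a[i], &a[j]);
             i++;
         }
     }
     swap(&a[i], &a[hi]);
     return i;
 }
 
 /* In-place quicksort of a(lo..hi). Recursing on the smaller side and
    iterating on the larger side bounds the call stack at O(log n) frames. */
 void quicksort(int a[], size_t lo, size_t hi) {
     while (lo < hi) {
         size_t p = partition(a, lo, hi);
         if (p - lo < hi - p) {
             /* Left part a(lo..p-1) is smaller: recurse on it, loop on the right. */
             if (p > lo) quicksort(a, lo, p - 1);
             lo = p + 1;
         } else {
             /* Right part a(p+1..hi) is smaller or equal: recurse on it, loop on the left. */
             quicksort(a, p + 1, hi);
             if (p == lo) break;  /* avoid unsigned underflow when p == lo == 0 */
             hi = p - 1;
         }
     }
 }

Calling quicksort(a, 0, n − 1) on an array of n ≥ 1 integers sorts it without allocating any auxiliary array; the only extra storage is the handful of local variables in each of the O(log ''n'') stack frames.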
Most selection algorithms are also in-place, although some considerably rearrange the input array in the process of finding the final, constant-sized result. Some text manipulation algorithms such as trim and reverse may be done in-place.
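For instance, trimming whitespace from a string can be done by shifting characters within the same buffer. The following is a minimal sketch in C (the name trim_in_place is illustrative); it uses only a few index variables, so its working space does not grow with the length of the string.

 #include <ctype.h>
 #include <string.h>
 
 /* Trim leading and trailing whitespace from a NUL-terminated string,
    overwriting the buffer itself. Only a few size_t variables are used,
    so the extra space is independent of the string length. */
 void trim_in_place(char *s) {
     size_t len = strlen(s);
 
     /* Drop trailing whitespace by shortening the string. */
     while (len > 0 && isspace((unsigned char)s[len - 1]))
         len--;
 
     /* Find the first non-whitespace character. */
     size_t start = 0;
     while (start < len && isspace((unsigned char)s[start]))
         start++;
 
     /* Shift the kept characters to the front and re-terminate. */
     memmove(s, s + start, len - start);
     s[len - start] = '\0';
 }

For example, calling trim_in_place(buf) on a buffer containing "  hello " leaves it containing "hello", with no second buffer allocated.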